Systemantics: How Systems Work and Especially How They Fail

Author: John Gall

Overview

I wrote Systemantics: How Systems Work and Especially How They Fail to expose the inherent flaws and paradoxical behaviors of systems of all sizes. Primarily targeting those who design, manage, or interact with complex systems, I aim to equip readers with a pragmatic understanding of “how systems work,” or perhaps more accurately, “how they fail,” to help avoid common pitfalls. My work is especially relevant in an age of increasing system complexity, from large organizations and bureaucracies to technological systems and artificial intelligence, where unintended consequences and unexpected behaviors can have far-reaching implications. I offer a system-theoretic perspective – ‘systemantics’ – based on axioms and theorems, illustrated by real-world examples and case studies, or what I call “Horrible Examples.” I argue that systems don’t behave as expected; they behave as systems, prioritizing their own survival and growth, often contrary to their stated goals. This leads to a pervasive “grand illusion” of control, where those within systems develop delusional beliefs. I emphasize that successful system design requires understanding these inherent limitations, focusing on strategies like problem avoidance and creative adaptation instead of the common, futile approach of “pushing on the system.” I explore communication failures and the decay of information within systems, stressing the importance of clear feedback and adaptation. Finally, I advocate a shift from ‘systemism’ (the naive belief in perfect systems) to a more pragmatic ‘systemantics’, emphasizing creative coping strategies in the face of inevitable system dysfunction rather than seeking perfection. This pragmatic approach is essential for effectively navigating the complexities of our increasingly system-dominated world.

Book Outline

1. First Principles

Creating new systems doesn’t solve problems, it creates new ones inherent to the system itself. A system designed to address one problem always generates additional, system-related problems that must then be dealt with.

Key concept: New Systems Mean New Problems

2. Laws of Growth

Systems inherently grow and encroach, consuming more resources and taking over functions from other systems, sometimes to the point of absurdity.

Key concept: Systems Tend To Expand To Fill The Known Universe

3. The Generalized Uncertainty Principle

The behavior of complex systems is inherently unpredictable. Our best-laid plans tend to go awry due to unexpected factors and paradoxical outcomes, what I term the ‘Generalized Uncertainty Principle’.

Key concept: Complex Systems Exhibit Unexpected Behavior

4. A…B…C…Disaster (Feedback)

Systems tend to resist change and oppose their own functions, a principle exemplified by Le Chatelier’s Principle in chemistry and physics. Attempting to impose goals and objectives on a system can backfire, leading to decreased productivity and morale.

Key concept: The System Always Kicks Back

6. What Next? (The Life Cycle of Systems)

Systems often fail because they’re optimized for past challenges, becoming rigid and incapable of adapting to new situations. This is demonstrated by everything from military strategies to architecture to organizational design.

Key concept: The Army is Now Fully Prepared to Fight the Previous War

7. The Grand Illusion

The stated purpose of a system is often very different from what the system actually does, because the system itself takes on a life of its own, beyond the control or understanding of the individuals within it. This is termed the ‘Operational Fallacy’.

Key concept: People In Systems Do Not Do What The System Says They Are Doing

8. Inside Systems

Within a system, reality becomes what is reported to the system, not what’s actually happening in the outside world. This ‘Fundamental Law of Administrative Workings’ (F.L.A.W.) leads to a distorted view of reality.

Key concept: Things Are What They Are Reported To Be

9. Delusion Systems Versus Systems Delusions

People become so entrenched in the belief systems of the systems they’re in that they lose sight of reality. This creates ‘systems-delusions’ like believing that traveling to an airport is the same as reaching a city.

Key concept: Systems-delusions are the delusion systems that are almost universal in our modern world.

10. Systems-People

Systems not only shape the people within them but also attract individuals whose attributes and motivations align with the system’s dynamics, leading to specialized, and sometimes bizarre, behaviors.

Key concept: Systems attract Systems-People

12. Advanced Systems-Functions

Large, complex systems can operate in failure mode for extended periods without anyone noticing, because the system’s complexity obscures the malfunction.

Key concept: In Complex Systems, Malfunction and Even Total Non-Function May Not Be Detectable for Long Periods, If Ever

13. The System Knows (System Goals)

Systems prioritize their own survival and growth above any other goal, often acting in ways contrary to the stated purpose or the desires of those within them.

Key concept: Intrasystem Goals Come First

Essential Questions

1. Do new systems solve problems or create them?

The book argues that creating new systems doesn’t solve existing problems; it simply adds a new layer of system-related problems. This is because systems, as new entities, operate according to general laws that often lead to unexpected behavior and unintended consequences. The example of garbage collection illustrates this: creating a garbage collection system introduces problems like union negotiations, truck maintenance, and bureaucratic overhead, in addition to the original problem of garbage. This underscores the need to avoid creating unnecessary systems and to seek simpler solutions whenever possible.

2. Why is the behavior of complex systems unpredictable?

The behavior of complex systems is unpredictable due to factors like unexpected interactions, feedback loops, and emergent properties. The ‘Generalized Uncertainty Principle’ highlights the inherent paradox of systems, where actions can produce the opposite of the intended outcome. The example of insecticides causing ecological damage and insect resistance demonstrates this. This principle emphasizes the need for humility in dealing with systems and the importance of adapting to unexpected behavior rather than trying to control it.

3. Why don’t systems do what they say they are doing?

The stated purpose of a system often diverges from its operational function. This ‘Operational Fallacy’ arises because systems develop internal goals focused on self-preservation and growth, regardless of the intentions of their designers. The ‘Naming Fallacy’ further complicates matters, as the name of a function within a system doesn’t necessarily reflect what that function actually does. For example, a ‘Ministry of Truth’ may actually function to suppress dissent and spread propaganda. This underscores the need for skepticism in evaluating systems and the importance of looking beyond stated goals.

4. How do systems lose touch with reality?

Within large systems, information is often distorted, incomplete, or delayed, leading to a disconnect from reality. The ‘Fundamental Law of Administrative Workings’ (F.L.A.W.) states that ‘things are what they are reported to be,’ emphasizing how information filtering within systems creates a simplified, and often inaccurate, picture of the outside world. This, along with sensory deprivation within the system, contributes to systems-delusions and bizarre malfunctions. Recognizing this, managers must actively seek diverse sources of information and cultivate an awareness of the limitations of system-generated data.

5. How do systems attract and shape the people within them?

Systems tend to attract and retain individuals whose attributes align with the system’s dynamics. These ‘systems-people’ exhibit specialized behaviors and motivations, some beneficial to the system’s function and some parasitic, exploiting it for personal gain. This selection process, coupled with the isolating and reality-distorting effects of systems, can lead to bizarre and sometimes harmful behaviors. Understanding this dynamic is crucial for designing systems that discourage parasitic behavior and attract individuals who contribute positively to the system’s overall goals.

Key Takeaways

1. Start Simple, Then Grow

The ‘Systems Law of Inertia’ states that systems continue to operate in the way they were designed, regardless of changing needs or circumstances. Complex systems, designed from scratch, rarely work as intended and cannot be easily fixed. Starting with a simpler, working system allows for incremental improvements and adaptations, increasing the likelihood of success. This avoids the “Perfectionist’s Paradox,” where striving for perfection in a complex initial design leads to failure.

Practical Application:

In AI development, avoid building overly complex systems from scratch. Instead, start with a simple, working AI model for a specific task, and gradually expand its capabilities and complexity as needed, based on feedback and testing.
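
A minimal sketch of this incremental approach, assuming a hypothetical text-classification task with made-up data: start with the simplest classifier that works, measure it, and add a richer stage only when evaluation shows the simple one falling short.

```python
# Sketch: grow an AI system incrementally instead of designing it "complete" up front.
# The task, examples, and accuracy threshold below are hypothetical stand-ins.

from collections import Counter

# A tiny labelled dataset standing in for real evaluation/feedback data.
EXAMPLES = [
    ("refund not received", "complaint"),
    ("love the new feature", "praise"),
    ("app crashes on login", "complaint"),
    ("great support team", "praise"),
    ("payment failed again", "complaint"),
]

def baseline_classifier(text: str) -> str:
    """Stage 1: the simplest thing that could possibly work."""
    negative_words = {"not", "crashes", "broken", "refund"}
    return "complaint" if any(w in text.split() for w in negative_words) else "praise"

def bag_of_words_classifier(text: str) -> str:
    """Stage 2: a slightly richer nearest-example model, added only if evaluation justifies it."""
    scores = Counter()
    for example_text, label in EXAMPLES:
        scores[label] += len(set(text.split()) & set(example_text.split()))
    return scores.most_common(1)[0][0]

def evaluate(classifier) -> float:
    return sum(classifier(text) == label for text, label in EXAMPLES) / len(EXAMPLES)

if __name__ == "__main__":
    accuracy = evaluate(baseline_classifier)
    print(f"baseline accuracy: {accuracy:.2f}")
    if accuracy < 0.90:  # grow only when the working simple system demonstrably falls short
        print(f"stage-2 accuracy: {evaluate(bag_of_words_classifier):.2f}")
```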

2. Expect the Unexpected

The behavior of complex systems is unpredictable due to emergent properties, unexpected interactions, and the Generalized Uncertainty Principle. Systems rarely behave as planned, often producing surprising or counterintuitive results. This takeaway emphasizes the need for careful observation, experimentation, and a willingness to adapt to unforeseen outcomes when working with complex systems like AI.

Practical Application:

When designing AI systems, anticipate unintended consequences and unexpected behaviors. Test the system in diverse scenarios and gather feedback from multiple sources to identify potential problems or emergent properties that were not foreseen.
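
One way to make this concrete is a small scenario harness that feeds the system deliberately diverse inputs and records anything surprising. The system under test, the scenarios, and the invariant below are all hypothetical placeholders.

```python
# Sketch: probe a system with deliberately diverse scenarios and record surprises,
# instead of testing only the happy path.

def word_stats(text: str) -> str:
    """Stand-in system under test: summarizes a piece of text."""
    words = text.split()
    return f"{len(words)} words, avg length {sum(map(len, words)) / len(words):.1f}"

SCENARIOS = {
    "typical input": "a normal sentence of reasonable length",
    "empty input": "",
    "very long input": "word " * 10_000,
    "non-latin input": "データは報告された通りのものになる",
    "control characters": "line1\x00line2",
}

def run_scenarios(system) -> list[str]:
    surprises = []
    for name, payload in SCENARIOS.items():
        try:
            result = system(payload)
            if not result:  # crude invariant: the system should always say *something*
                surprises.append(f"{name}: empty output")
        except Exception as exc:  # a crash is a finding to record, not a reason to stop
            surprises.append(f"{name}: raised {type(exc).__name__}: {exc}")
    return surprises

if __name__ == "__main__":
    for finding in run_scenarios(word_stats):
        print("SURPRISE:", finding)
    print("done; absence of surprises is not proof of correct behavior")
```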

3. Define the Problem, Don’t Overengineer

Systems don’t inherently solve problems; they are someone’s attempt at a solution, which often creates new problems. The ‘Problem’ Problem is the tendency to address complex issues with equally complex solutions, further complicating matters. The takeaway emphasizes a more pragmatic approach: identify a specific, manageable problem, develop a targeted solution, and iterate based on real-world feedback.

Practical Application:

In AI product development, avoid the ‘Problem’ Problem by clearly defining the specific problem the AI is intended to solve. Resist the urge to build overly complex solutions. Focus on building a working system, even if imperfect, and iterate based on feedback.

4. Focus on System Behavior, Not Individual Roles

The roles and functions within a system may not align with the actual activities or contributions of the individuals within it. The ‘Operational Fallacy’ emphasizes that people in systems don’t always do what the system says they are doing. The system itself often functions in ways unforeseen by its designers. This necessitates focusing on overall system behavior and outcomes, not just individual performance.

Practical Application:

In managing AI teams, be mindful of the ‘Operational Fallacy’ by recognizing that individual contributions may not directly translate to overall system performance. Use clear metrics and feedback loops to evaluate the system’s effectiveness, rather than just individual efforts.
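
A toy illustration of the gap between component metrics and system behavior, with entirely hypothetical pipeline stages and scores: each component reports a healthy local number, yet the end-to-end check tells a different story.

```python
# Sketch: measure end-to-end system behavior, not just per-component performance.
# The pipeline stages, metrics, and queries below are hypothetical illustrations.

def retrieve(query: str) -> list[str]:
    """Component A: scores well on its own retrieval metric."""
    return ["doc about billing", "doc about shipping"]

def answer(query: str, docs: list[str]) -> str:
    """Component B: scores well on its own answer-quality metric."""
    return f"Based on {len(docs)} documents: see our policy page."

def component_metrics() -> dict[str, float]:
    # Each team reports a healthy-looking local number.
    return {"retrieval_recall": 0.95, "answer_fluency": 0.92}

def system_metric(queries: list[str]) -> float:
    """The system-level question: did the user's problem actually get resolved?"""
    resolved = 0
    for q in queries:
        reply = answer(q, retrieve(q))
        # End-to-end criterion: a reply that merely points elsewhere is not a resolution.
        if "see our policy page" not in reply:
            resolved += 1
    return resolved / len(queries)

if __name__ == "__main__":
    print("component metrics:", component_metrics())
    print("end-to-end resolution rate:", system_metric(["where is my refund?", "cancel my order"]))
```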

5. Embrace Feedback, Adapt Quickly

Systems that ignore feedback are prone to ‘Terminal Instability.’ Feedback is essential for systems to adapt to changing conditions and avoid catastrophic failure. However, feedback must be timely and appropriately tailored to the system’s dynamics. This takeaway highlights the importance of designing feedback loops into AI systems from the start and being prepared to adapt based on the feedback received.

Practical Application:

In AI product design, incorporate diverse feedback mechanisms to monitor system behavior in real time. Be prepared to adapt quickly to unexpected outcomes or errors, rather than rigidly adhering to the original design.
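
A rough sketch of such a feedback loop, with simulated outcomes and hypothetical thresholds: it watches a rolling window of recent results and triggers a corrective action when the error rate drifts past an agreed limit.

```python
# Sketch: a feedback loop that watches live behavior and adapts before drift
# compounds. The thresholds, drift simulation, and corrective action are hypothetical.

import random
from collections import deque

WINDOW = 50            # recent outcomes the loop considers
ERROR_THRESHOLD = 0.2  # adapt once the recent error rate drifts past this

def serve_request(step: int) -> bool:
    """Stand-in for one model prediction; True means the outcome was acceptable.
    The error rate drifts upward to simulate the world changing under the system."""
    drifting_error_rate = 0.05 + step / 2000
    return random.random() > drifting_error_rate

def adapt(step: int) -> None:
    """Placeholder corrective action: retrain, roll back, or hand off to a human."""
    print(f"step {step}: recent error rate exceeded {ERROR_THRESHOLD:.0%}, adapting")

def feedback_loop(num_requests: int = 500) -> None:
    recent = deque(maxlen=WINDOW)
    for step in range(num_requests):
        recent.append(serve_request(step))
        if len(recent) == WINDOW:
            error_rate = 1 - sum(recent) / WINDOW
            if error_rate > ERROR_THRESHOLD:
                adapt(step)
                recent.clear()  # measure the adapted system fresh

if __name__ == "__main__":
    feedback_loop()
```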

Memorable Quotes

Chapter 1 - First Principles

New systems mean new problems.

Chapter 2 - Laws of Growth

Systems tend to expand to fill the known universe.

Chapter 3 - The Generalized Uncertainty Principle

Complex systems exhibit unexpected behavior.

Chapter 4 - A…B…C…Disaster (Feedback)

The system always kicks back.

Chapter 7 - The Grand Illusion

People in systems do not do what the system says they are doing.

Comparative Analysis

Systemantics stands apart with its focus on the inherent, natural laws that govern system behavior, regardless of human intentions or competence. This contrasts with traditional management theories that often emphasize human factors like leadership or efficiency as the primary drivers of success. While works like The Peter Principle explore the dynamics of incompetence within hierarchies, they don’t address the larger systemic issues that Gall emphasizes. Works in cybernetics and general systems theory, like those by Ashby, Wiener, and von Bertalanffy, come closer to Gall’s perspective, acknowledging the complex and often unpredictable nature of systems. However, Systemantics differentiates itself with its highly pragmatic, almost satirical approach, using ‘Horrible Examples’ to drive home its points rather than the more formal, mathematical treatment typical of cybernetics. This makes Systemantics more accessible to a wider audience and more directly applicable to everyday problems.

Reflection

Systemantics offers a valuable critique of our tendency towards ‘systemism,’ the blind faith in systems as solutions to all problems. While Gall’s satirical style can be engaging, it sometimes oversimplifies the complexities of system behavior. For example, while ‘systems-people’ certainly exist, it is an overstatement to say that systems only attract such individuals. Also, the idea that all systems are inherently flawed needs to be taken with a grain of salt – the failure of a specific voting machine doesn’t doom all technology. However, Gall’s core message about the inherent limitations and unexpected behaviors of systems is vitally relevant, especially in the context of AI. As we build increasingly complex AI systems, it’s crucial to be aware of the potential for unintended consequences, feedback loops, and emergent properties that can lead to unpredictable outcomes. Systemantics provides a framework for approaching AI development with a healthy dose of skepticism and a focus on adaptability, rather than striving for unrealistic goals of perfect control or optimization.

Flashcards

What is the Operational Fallacy?

A system’s behavior is determined by its internal structure and operating principles, which may have little to do with its stated purpose.

What is the general law of systems growth?

The tendency of systems to grow and encroach, often to the point of absurdity.

What is the Fundamental Law of Administrative Workings (F.L.A.W.)?

Within a system, reality is what is reported to the system, not necessarily what’s happening in the outside world.

What is the primary goal of a system?

Systems prioritize their own survival and growth over all other goals.

What is the ‘Problem’ Problem?

Systems don’t solve problems; they are someone’s solution to a problem, and that solution often creates new problems.

What are the limits of grandiosity?

In dealing with complex systems, small, incremental changes are often more effective than grand, sweeping designs.

What is Perfectionist’s Paradox?

Attempting to perfect a system can actually hinder its functionality and create new problems.

What is the Generalized Uncertainty Principle?

Systems are inherently unpredictable and exhibit unexpected behaviors. Be prepared for surprises!

What is Reframing?

The act of redefining a problem or situation in a new, more productive frame of reference.

What is the Fundamental Failure Theorem?

Any system can fail, and large systems tend to operate in failure mode most of the time.
